
    Speed and accuracy of dyslexic versus typical word recognition: an eye-movement investigation

    Developmental dyslexia is often characterized by a dual deficit in both word recognition accuracy and general processing speed. While previous research into dyslexic word recognition may have suffered from speed-accuracy trade-offs, the present study employed a novel eye-tracking task that is less prone to such confounds. Participants (10 dyslexics and 12 controls) were asked to look at real-word stimuli, and to ignore simultaneously presented non-word stimuli, while their eye movements were recorded. Improvements in word recognition accuracy over time were modeled in terms of a continuous non-linear function. The words’ rhyme consistency and the non-words’ lexicality (unpronounceable, pronounceable, pseudohomophone) were manipulated within subjects. Speed-related measures derived from the model fits confirmed generally slower processing in dyslexics, and showed a rhyme consistency effect in both dyslexics and controls. In terms of overall error rate, dyslexics (but not controls) performed less accurately on rhyme-inconsistent words, suggesting a representational deficit for such words in dyslexics. Interestingly, neither group showed a pseudohomophone effect in speed or accuracy, which might call the task-independent pervasiveness of this effect into question. The present results illustrate the importance of distinguishing between speed- vs. accuracy-related effects for our understanding of dyslexic word recognition.
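
    As a purely illustrative sketch (not the authors' actual model), the Python snippet below shows how accuracy-over-time data of this kind might be fitted with a continuous non-linear function and how speed- and accuracy-related measures can be read off the fitted parameters. The specific curve (a delayed exponential approach to an asymptote), the parameter names, and the simulated data are assumptions made here for demonstration.

        # Hypothetical sketch: fit a continuous non-linear accuracy-over-time curve.
        import numpy as np
        from scipy.optimize import curve_fit

        def accuracy_curve(t, onset, rate, asymptote):
            # Chance-level looking (0.5) before `onset`, then exponential growth toward `asymptote`.
            growth = asymptote - (asymptote - 0.5) * np.exp(-rate * (t - onset))
            return np.where(t < onset, 0.5, growth)

        # Simulated per-time-bin accuracy for one participant (illustrative data only).
        t = np.arange(0, 2000, 50)          # time bins in ms
        rng = np.random.default_rng(1)
        truth = accuracy_curve(t, onset=400, rate=0.004, asymptote=0.92)
        observed = np.clip(truth + rng.normal(0, 0.03, t.size), 0, 1)

        # Fit the curve: `onset` and `rate` act as speed-related measures,
        # `asymptote` as an overall accuracy-related measure.
        params, _ = curve_fit(accuracy_curve, t, observed,
                              p0=[300, 0.003, 0.9],
                              bounds=([0, 1e-4, 0.5], [1500, 0.05, 1.0]))
        print(dict(zip(["onset_ms", "rate_per_ms", "asymptote"], params)))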

    Cross-domain priming from mathematics to relative-clause attachment: a visual-world study in French

    Human language processing must rely on a certain degree of abstraction, as we can produce and understand sentences that we have never produced or heard before. One way to establish syntactic abstraction is by investigating structural priming. Structural priming has been shown to be effective within a cognitive domain, in the present case the linguistic domain. But does priming also work across different domains? In line with previous experiments, we investigated cross-domain structural priming from mathematical expressions to linguistic structures with respect to relative clause attachment in French (e.g., la fille du professeur qui habitait à Paris/the daughter of the teacher who lived in Paris). Testing priming in French is particularly interesting because it extends earlier results established for English to a language where the baseline for relative clause attachment preferences is different from English: in English, relative clauses (RCs) tend to be attached to the local noun phrase (low attachment), while in French there is a preference for high attachment of relative clauses to the first noun phrase (NP). Moreover, in contrast to earlier studies, we applied an online technique (visual-world eye-tracking). Our results confirm cross-domain priming from mathematics to linguistic structures in French. Most interestingly, we found that in mathematically skilled participants, unlike less mathematically adept ones, the effect emerged very early (at the beginning of the relative clause in the speech stream) and was also present later (at the end of the relative clause). In line with previous findings, our experiment suggests that mathematics and language share aspects of syntactic structure at a very high level of abstraction.

    Referential and visual cues to structural choice in visually situated sentence production

    We investigated how conceptually informative (referent preview) and conceptually uninformative (pointer to the referent’s location) visual cues affect structural choice during production of English transitive sentences. Cueing the Agent or the Patient prior to presenting the target event reliably predicted the likelihood of selecting this referent as the sentential Subject, triggering, correspondingly, the choice between active and passive voice. Importantly, there was no difference in the magnitude of the general cueing effect between the informative and uninformative cueing conditions, suggesting that attentionally driven structural selection relies on a direct, automatic mapping mechanism from attentional focus to the Subject’s position in a sentence. This mechanism is, therefore, independent of access to conceptual, and possibly lexical, information about the cued referent provided by referent preview.

    Negation in context: evidence from the visual world paradigm

    The literature assumes that negation is more difficult to understand than affirmation, but this might depend on the pragmatic context. The goal of this paper is to show that pragmatic knowledge modulates the unfolding processing of negation due to the previous activation of the negated situation. To test this, we used the visual world paradigm. In this task, we presented affirmative (e.g., her dad was rich) and negative sentences (e.g., her dad was not poor) while participants viewed two images depicting the affirmed and denied entities. The critical sentence in each item was preceded by one of three types of context: an inconsistent context (e.g., She supposed that her dad had little savings) that activates the negated situation (a poor man), a consistent context (e.g., She supposed that her dad had enough savings) that activates the actual situation (a rich man), or a neutral context (e.g., her dad lived on the other side of town) that activates neither of the two models previously suggested. The results corroborated our hypothesis: pragmatics is implicated in the unfolding processing of negation. We found an increase in fixations on the target compared to the baseline for negative sentences at 800 ms in the neutral context, 600 ms in the inconsistent context, and 1450 ms in the consistent context. Thus, when the negated situation has been previously introduced via an inconsistent context, negation is facilitated.
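
    As an illustration only (not the authors' analysis pipeline), the sketch below shows one common way to estimate when target fixations begin to exceed a chance baseline in visual-world data of this kind. The synthetic data, the 50 ms binning, and the simple per-bin one-sample t-test criterion are all assumptions made for demonstration.

        # Hypothetical sketch: estimate the point at which target fixations
        # reliably exceed a chance baseline in a visual-world task.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        bins = np.arange(0, 2000, 50)        # time bins (ms) after sentence onset
        n_trials = 40

        # Simulated target-fixation data: around chance (0.5) early,
        # rising after roughly 800 ms (loosely mimicking a neutral-context condition).
        p_target = 0.5 + 0.35 / (1 + np.exp(-(bins - 800) / 120))
        fixations = rng.binomial(1, p_target, size=(n_trials, bins.size))

        def divergence_point(fix, baseline=0.5, n_consecutive=4, alpha=0.05):
            # First bin from which `n_consecutive` successive bins all exceed
            # the baseline according to a one-sided one-sample t-test.
            sig = np.array([stats.ttest_1samp(fix[:, i], baseline,
                                              alternative="greater").pvalue < alpha
                            for i in range(fix.shape[1])])
            for i in range(sig.size - n_consecutive + 1):
                if sig[i:i + n_consecutive].all():
                    return bins[i]
            return None

        print("Estimated divergence point (ms):", divergence_point(fixations))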

    TEST: A Tropic, Embodied, and Situated Theory of Cognition

    TEST is a novel taxonomy of knowledge representations based on three distinct, hierarchically organized representational features: Tropism, Embodiment, and Situatedness. Tropic representational features reflect constraints of the physical world on the agent’s ability to form, reactivate, and enrich embodied (i.e., resulting from the agent’s bodily constraints) conceptual representations embedded in situated contexts. The proposed hierarchy entails that representations can, in principle, have tropic features without necessarily having situated and/or embodied features. On the other hand, representations that are situated and/or embodied are likely to be simultaneously tropic. Hence, while we propose tropism as the most general term, embodiment and situatedness are more on a par with each other, such that the dominance of one component over the other relies on the distinction between offline storage vs. online generation, as well as on representation-specific properties.

    Pupillary responses to affective words in bilinguals’ first versus second language

    Late bilinguals often report less emotional involvement in their second language, a phenomenon called reduced emotional resonance in L2. The present study measured pupil dilation in response to high- versus low-arousing words (e.g., riot vs. swamp) in German-English and Finnish-English late bilinguals, both in their first and in their second language. A third sample of English monolingual speakers (tested only in English) served as a control group. To improve on previous research, we controlled for lexical confounds such as length, frequency, emotional valence, and abstractness, both within and across languages. Results showed no appreciable differences in post-trial word recognition judgements (98% recognition on average), but reliably stronger pupillary effects of the arousal manipulation when stimuli were presented in participants’ first rather than second language. This supports the notion of reduced emotional resonance in L2. Our findings are unlikely to be due to differences in stimulus-specific control variables or to potential word-recognition difficulties in participants’ second language. Linguistic relatedness between first and second language (German-English vs. Finnish-English) was also not found to have a modulating influence.

    Inner voice experiences during processing of direct and indirect speech

    In this chapter, we review a new body of research on language processing, focusing particularly on the distinction between direct speech (e.g., Mary said, “This dress is absolutely beautiful!”) and indirect speech (e.g., Mary said that the dress was absolutely beautiful). First, we will discuss an important pragmatic distinction between the two reporting styles and highlight the consequences of this distinction for prosodic processing. While direct speech provides vivid demonstrations of the reported speech act (informing recipients about how something was said by another speaker), indirect speech is more descriptive of what was said by the reported speaker. This is clearly reflected in differential prosodic contours for the two reporting styles during speaking: Direct speech is typically delivered with a more variable and expressive prosody, whereas indirect speech tends to be used in combination with a more neutral and less expressive prosody. Next, we will introduce recent evidence in support of an “inner voice” during language comprehension, especially during silent reading of direct speech quotations. We present and discuss a coherent stream of research using a wide range of methods, including speech analysis, functional magnetic resonance imaging (fMRI), and eye-tracking. The findings are discussed in relation to overt (or ‘explicit’) prosodic characteristics that are likely to be observed when direct and indirect speech are used in spoken utterances (such as during oral reading). Indeed, the research we review here makes a convincing case for the hypothesis that recipients spontaneously activate voice-related mental representations during silent reading, and that such an “inner voice” is particularly pronounced when reading direct speech quotations (and much less so for indirect speech). The corresponding brain activation patterns, as well as correlations between silent and oral reading data, furthermore suggest that this “inner voice” during silent reading is related to the supra-segmental and temporal characteristics of actual speech. For ease of comparison, we shall dub this phenomenon of an “inner voice” (particularly during silent reading of direct speech) simulated implicit prosody to distinguish it from the default implicit prosody that is commonly discussed in relation to syntactic ambiguity resolution. In the final part of this chapter, we will attempt to specify the relation between simulated and default implicit prosody. Based on the existing empirical data and our own theoretical conclusions, we will discuss the similarities and discrepancies between the two not necessarily mutually exclusive terms. We hope that our discussion will motivate a new surge of interdisciplinary research that will not only extend our knowledge of prosodic processes during reading, but could potentially unify the two phenomena in a single theoretical framework.

    Motor (but not auditory) attention affects syntactic choice

    Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker’s attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. This issue is addressed by the current study. Native speakers of English viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue-location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker’s syntactic choices are modality-specific and limited to the visual and motor, but not the auditory, domains.

    The lexical boost effect is not diagnostic of lexically-specific syntactic representations

    Structural priming refers to the phenomenon whereby speakers and listeners unknowingly re-use syntactic structure across subsequent utterances. Previous research found that structural priming is reliably enhanced when lexical content is repeated (the lexical boost effect). A widely held assumption is that structure-licensing heads enjoy a privileged role in lexically boosting structural priming. The present comprehension-to-production priming experiments investigated whether head constituents (verbs) versus non-head constituents (argument nouns) contribute differently to boosting ditransitive structure priming in English. Experiment 1 showed that lexical boosts from repeated agent or recipient nouns (and, to a lesser extent, repeated theme nouns) were comparable to those from repeated verbs. Experiments 2 and 3 found that increasing numbers of content words shared between primes and targets led to increasing magnitudes of structural priming (again, with no ‘special’ contribution of verb repetition). We conclude that lexical boost effects are not diagnostic of lexically-specific syntactic representations, even though such representations are supported by other types of evidence.

    Investigating the foreign language effect as a mitigating influence on the ‘optimality bias’ in moral judgements

    Bilinguals often display reduced emotional resonance in their second language (L2) and therefore tend to be less prone to decision-making biases in their L2 (e.g., Costa et al. in Cognition 130(2):236–254, 2014a, PLoS One 9(4):1–7, 2014b), a phenomenon coined the Foreign Language Effect (FLE). The present pre-registered experiments investigated whether the FLE can mitigate a special case of cognitive bias, called the optimality bias, which occurs when observers erroneously blame actors for making “suboptimal” choices even when there was not sufficient information available for the actor to identify the best choice (De Freitas and Johnson in J Exp Soc Psychol 79:149–163, 2018. https://doi.org/10.1016/j.jesp.2018.07.011). In Experiment 1, L1 English speakers (N = 63) were compared to L2 English speakers from various L1 backgrounds (N = 56). In Experiment 2, we compared Finnish bilinguals completing the study in either Finnish (L1, N = 103) or English (L2, N = 108). Participants read a vignette describing the same tragic outcome resulting from either an optimal or a suboptimal choice made by a hypothetical actor with insufficient knowledge. Their blame attributions were measured using a 4-item scale. A strong optimality bias was observed; participants assigned significantly more blame in the suboptimal choice conditions, despite being told that the actor did not know which choice was best. However, no clear interaction with language was found. In Experiment 1, bilinguals gave reliably higher blame scores than natives. In Experiment 2, no clear influence of target language was found, but the results suggested that the FLE is actually more detrimental than helpful in the domain of blame attribution. Future research should investigate the benefits of emotional involvement in blame attribution, including factors such as empathy and perspective-taking.